Hidden convexity in partially separable optimization

Authors

  • Aharon Ben-Tal
  • Dick den Hertog
  • Monique Laurent
Abstract

The paper identifies classes of nonconvex optimization problems whose convex relaxations have optimal solutions that are simultaneously globally optimal solutions of the original nonconvex problems. Such a hidden convexity property had so far been limited to quadratically constrained quadratic problems with one or two constraints. We extend it here to problems with partially separable structure. Among other things, the new hidden convexity results open up the possibility of solving multi-stage robust optimization problems using certain nonlinear decision rules.
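As a concrete illustration of the classical one-constraint case the abstract refers to (a minimal sketch under our own assumptions, not the paper's method or notation): the trust-region subproblem min xᵀAx + 2bᵀx subject to ‖x‖² ≤ 1 is nonconvex when A is indefinite, yet it exhibits hidden convexity and its global optimum can be recovered by a one-dimensional search on the Lagrange multiplier λ ≥ max(0, −λ_min(A)), solving (A + λI)x = −b. The problem data below is invented for illustration.

```python
import numpy as np

# Trust-region subproblem:  min x'Ax + 2 b'x  s.t.  ||x||^2 <= 1.
# A is indefinite (eigenvalues 1 and -2), so the objective is nonconvex,
# but the global optimum lies on the boundary and is found by bisection
# on the multiplier lam >= -lambda_min(A), since ||x(lam)|| is
# monotonically decreasing in lam on that interval.
A = np.array([[1.0, 0.0], [0.0, -2.0]])
b = np.array([0.5, 0.3])

lam_min = np.linalg.eigvalsh(A)[0]           # smallest eigenvalue of A
lo, hi = max(0.0, -lam_min) + 1e-9, 100.0
for _ in range(200):
    lam = 0.5 * (lo + hi)
    x = np.linalg.solve(A + lam * np.eye(2), -b)
    if np.linalg.norm(x) > 1.0:              # too long: increase lam
        lo = lam
    else:                                    # too short: decrease lam
        hi = lam
x_star = x
f_star = x_star @ A @ x_star + 2 * b @ x_star

# Brute-force check over a grid of the feasible disk: no feasible grid
# point should beat the multiplier-based solution.
grid = np.linspace(-1, 1, 201)
best = min(
    np.array([u, v]) @ A @ np.array([u, v]) + 2 * b @ np.array([u, v])
    for u in grid for v in grid if u * u + v * v <= 1.0
)
print(f_star, best)
```

The bisection exploits exactly the hidden convexity phenomenon: although the problem is nonconvex, strong duality holds and the dual is a one-dimensional concave problem, so a global solution is cheap to certify.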


Similar resources

A Note on Nonconvex Minimax Theorem with Separable Homogeneous Polynomials

The minimax theorem for a convex-concave bifunction is a fundamental theorem in optimization and convex analysis, and has many applications in economics. In the last two decades, a nonconvex extension of this minimax theorem has been well studied under various generalized convexity assumptions. In this note, by exploiting the hidden convexity (joint range convexity) of separable homogeneous...


Parallel Variable Distribution for Constrained Optimization

In the parallel variable distribution (PVD) framework for solving optimization problems, the variables are distributed among parallel processors, with each processor having primary responsibility for updating its block of variables while allowing the remaining "secondary" variables to change in a restricted fashion along some easily computable directions. For constrained nonlinear programs c...


Convex Neural Networks

Convexity has recently received a lot of attention in the machine learning community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multi-layer artificial neural networks. We show that training multi-layer neural networks in which the number of hidden units is learned can be viewed as a convex optimization problem. This problem involves an ...


An Indirect Method of Nonconvex Variational Problems in Asplund Spaces: The Case for Saturated Measure Spaces

The purpose of this paper is to establish an existence result for nonconvex variational problems with Bochner integral constraints in separable Asplund spaces via the Euler–Lagrange inclusion, under the saturation hypothesis on measure spaces, which makes the Lyapunov convexity theorem valid in Banach spaces. The approach is based on the indirect method of the calculus of variations.


Genetic crossover operator for partially separable functions

Partial separation is a mathematical technique that has been used in optimization for the last 15 years. Genetic algorithms (GAs), on the other hand, are widely used as global optimizers. This paper investigates how partial separability can be used in conjunction with GAs. In the first part of this paper, a crossover operator designed to solve partially separable global optimization problems involving ...




Publication date: 2011